

Search results: Creators/Authors contains: "Singla, Sahil"

Note: Clicking a Digital Object Identifier (DOI) takes you to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the publisher's embargo period.

  1. In this paper, we propose a general framework for designing efficient polynomial-time approximation schemes (EPTASs) for fundamental stochastic combinatorial optimization problems. Technically speaking, our approach relies on presenting tailor-made reductions to a newly introduced multidimensional Santa Claus problem. Even though the single-dimensional version of this problem is already known to be APX-hard, we prove that an EPTAS can be designed for a constant number of machines and dimensions, a condition which holds for each of our applications. To demonstrate the versatility of our framework, we first study selection-stopping settings to derive an EPTAS for the free-order prophets problem and for its cost-driven generalization, Pandora's box with commitment. These results constitute the first approximation schemes in the nonadaptive setting and improve on known inefficient polynomial-time approximation schemes (PTASs) for their adaptive variants. Next, turning our attention to stochastic probing problems, we obtain an EPTAS for the adaptive ProbeMax problem as well as for its nonadaptive counterpart. In both cases, state-of-the-art approximability results have been inefficient PTASs (Chen et al. [25], Fu et al. [35]). Funding: This work was supported by the National Science Foundation, Division of Computing and Communication Foundations [Grants 2327010 and 2440113].
    Free, publicly-accessible full text available December 11, 2026
  2. Free, publicly-accessible full text available July 13, 2026
  3. Free, publicly-accessible full text available June 15, 2026
  4. The Prophet Inequality and Pandora's Box problems are fundamental stochastic problems with applications in Mechanism Design, Online Algorithms, Stochastic Optimization, Optimal Stopping, and Operations Research. A usual assumption in these works is that the probability distributions of the n underlying random variables are given as input to the algorithm. Since in practice these distributions need to be learned under limited feedback, we initiate the study of such stochastic problems in the Multi-Armed Bandits model. In the Multi-Armed Bandits model we interact with n unknown distributions over T rounds: in round t we play a policy x(t) and receive only the value of x(t) as feedback. The goal is to minimize the regret, which is the difference over T rounds between the total value of the optimal algorithm that knows the distributions and the total value of our algorithm that learns the distributions from the limited feedback. Our main results give near-optimal Õ(poly(n)·√T) total regret algorithms for both Prophet Inequality and Pandora's Box. Our proofs proceed by maintaining confidence intervals on the unknown indices of the optimal policy. The exploration-exploitation tradeoff prevents us from directly refining these confidence intervals, so the main technique is to design a regret upper bound function that is learnable while playing low-regret Bandit policies.
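As background for the selection-stopping settings in the first abstract, here is a minimal sketch of the classical single-threshold prophet inequality (the well-known half-of-E[max] threshold rule), not the paper's EPTAS. All function names and the choice of Uniform[0,1] distributions are illustrative.

```python
import random

def expected_max(dists, trials=20000):
    # Monte Carlo estimate of E[max_i X_i], the prophet's benchmark.
    return sum(max(d() for d in dists) for _ in range(trials)) / trials

def threshold_stop(dists, t):
    # Gambler's rule: inspect variables in order, accept the first value >= t.
    for d in dists:
        x = d()
        if x >= t:
            return x
    return 0.0  # reached the end without accepting anything

random.seed(0)
dists = [random.random for _ in range(5)]  # five i.i.d. Uniform[0,1] variables

p = expected_max(dists)  # prophet's expected maximum
t = p / 2                # classical threshold: half of E[max]
g = sum(threshold_stop(dists, t) for _ in range(20000)) / 20000
# The prophet inequality guarantees E[gambler] >= E[max] / 2.
```

The free-order prophets problem additionally optimizes the inspection order; the sketch above fixes an arbitrary order and only illustrates the stopping-rule model.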
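The second abstract's proofs maintain confidence intervals on unknown quantities while playing low-regret bandit policies. A minimal sketch of the classical UCB1 index policy (a standard baseline for confidence-interval-based bandit learning, not the paper's algorithm) illustrates the idea; the arm means and horizon below are illustrative.

```python
import math
import random

def ucb1(means, T, seed=0):
    # Classical UCB1: keep a confidence interval around each arm's empirical
    # mean and always play the arm with the highest upper confidence bound.
    rng = random.Random(seed)
    n = len(means)
    counts = [0] * n     # pulls per arm
    sums = [0.0] * n     # total reward per arm
    total = 0.0
    for t in range(1, T + 1):
        if t <= n:
            arm = t - 1  # initialization: play each arm once
        else:
            arm = max(range(n),
                      key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2.0 * math.log(t) / counts[i]))
        reward = 1.0 if rng.random() < means[arm] else 0.0  # Bernoulli arm
        counts[arm] += 1
        sums[arm] += reward
        total += reward
    # Regret: best fixed arm in hindsight minus what we actually collected.
    return T * max(means) - total

regret = ucb1([0.9, 0.5, 0.4], T=5000)
```

UCB1's regret grows only logarithmically in T, whereas always playing the second arm here would incur linear regret of 0.4·T. The abstract's setting is harder: feedback is only the value of the played policy, which is why the paper designs a learnable regret upper bound rather than refining confidence intervals directly.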